Results 1 - 7 of 7
1.
JMIR Serious Games ; 10(4): e38315, 2022 Oct 19.
Article in English | MEDLINE | ID: covidwho-2308865

ABSTRACT

BACKGROUND: In recent years, with the development of computer science and medical science, virtual reality (VR) technology has become a promising tool for improving cognitive function. Research on VR-based cognitive training has garnered increasing attention. OBJECTIVE: This study aimed to investigate the application status, research hot spots, and emerging trends of VR in cognitive rehabilitation over the past 20 years. METHODS: Articles on VR-based cognitive rehabilitation from 2001 to 2021 were retrieved from the Web of Science Core Collection. CiteSpace software was used for the visual analysis of authors and countries or regions, and Scimago Graphica software was used for the geographic visualization of publishing countries or regions. Keywords were clustered using the gCLUTO software. RESULTS: A total of 1259 papers were included. In recent years, research on the application of VR in cognitive rehabilitation has been widely conducted, and the annual number of relevant publications has trended upward. The main research areas include neuroscience and neurology, psychology, computer science, and rehabilitation. The United States ranked first with 328 papers, and Italy ranked second with 140 papers. Giuseppe Riva, an Italian academic, was the most prolific author with 29 publications. The most frequently cited reference was "Using Virtual Reality to Characterize Episodic Memory Profiles in Amnestic Mild Cognitive Impairment and Alzheimer's Disease: Influence of Active and Passive Encoding." The most common keywords used by researchers include "virtual reality," "cognition," "rehabilitation," "performance," and "older adult." The largest source of research funding is the public sector in the United States. CONCLUSIONS: The bibliometric analysis provided an overview of the application of VR in cognitive rehabilitation. VR-based cognitive rehabilitation can be integrated into multiple disciplines.
We conclude that, in the context of the COVID-19 pandemic, the development of VR-based telerehabilitation is crucial, and many problems still need to be addressed, such as the lack of consensus on treatment methods and unresolved safety concerns.

2.
International Conference in Information Technology and Education, ICITED 2022 ; 320:555-565, 2023.
Article in English | Scopus | ID: covidwho-2282868

ABSTRACT

The purpose of the research is to analyze the higher academic offerings in communication and related fields in the Andean Community to determine whether there is a relationship with the potential demands of adolescents who have developed competencies for generating and exchanging audiovisual content since the beginning of the Covid-19 pandemic. The research question is: should the academic curricula related to audiovisual and media competence at Andean universities change or be updated to meet a new profile of young people who are already fluent in audiovisual language? The methodology is qualitative and descriptive, based on content analysis of the academic curricula of the leading universities in the international rankings that classify Latin American universities, as well as semi-structured interviews with experts in higher education in communication in the region. In communication studies and related areas, young people learn autonomously, before entering university, many of the techniques, processes, and audiovisual languages that feed the content of the current offerings, but this instruction is acquired without the corresponding social contexts, deontological sense, and responsibilities. The names of the degree programs are traditional, and media competence training is either not mentioned or only implicitly assumed; the same applies to the names of the courses taught, which follow common or generic professional training programs with an emphasis on journalistic production and audiovisual content. © 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

3.
26th International Conference Information Visualisation, IV 2022 ; 2022-July:385-392, 2022.
Article in English | Scopus | ID: covidwho-2231008

ABSTRACT

Coronary heart disease (CHD) remains the leading cause of premature death worldwide. Better risk stratification tools and personalized care of patients are needed for reducing the morbidity and mortality of CHD and the associated economic burden. However, contemporary e-learning solutions lack personalization and shared decision making and as a result, overwhelm patients with large amounts of information. CoroPrevention is a multiyear, EU-funded Horizon 2020 research project aiming to shape and implement a personalized secondary prevention strategy for patients with established CHD. As a part of the project, new digital tools will also be validated. In this paper, we discuss the process of creating audio-visual content for the CoroPrevention mobile application during the challenging COVID-19 pandemic. © 2022 IEEE.

4.
45th Mexican Conference on Biomedical Engineering, CNIB 2022 ; 86:860-870, 2023.
Article in English | Scopus | ID: covidwho-2148594

ABSTRACT

In recent years, education has been influenced by the implementation of new learning strategies due to the confinement caused by the spread of COVID-19. Teachers adopted audiovisual resources that allowed them to teach their classes remotely. However, the transition to this modality was sudden, forced, and evolved "on the fly" in response to the required pedagogical adaptations. A structured implementation of audiovisual media in education is needed, since such media now represent an important part of access to information. Toward this goal, we propose strategies for the development of complementary offline and online audiovisual content to support the teaching of practical courses in a Biomedical Engineering bachelor program. In particular, we present content created for a Medical Imaging Systems course. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.

5.
31st ACM World Wide Web Conference, WWW 2022 ; : 3623-3631, 2022.
Article in English | Scopus | ID: covidwho-1861669

ABSTRACT

This paper focuses on a critical problem of explainable multimodal COVID-19 misinformation detection, where the goal is to accurately detect misleading information in multimodal COVID-19 news articles and provide the reason or evidence that can explain the detection results. Our work is motivated by the lack of judicious study of the association between different modalities (e.g., text and image) of COVID-19 news content in current solutions. In this paper, we present a generative approach to detect multimodal COVID-19 misinformation by investigating the cross-modal association between the visual and textual content that is deeply embedded in the multimodal news content. Two critical challenges exist in developing our solution: 1) How to accurately assess the consistency between the visual and textual content of a multimodal COVID-19 news article? 2) How to effectively retrieve useful information from unreliable user comments to explain the misinformation detection results? To address the above challenges, we develop a duo-generative explainable misinformation detection (DGExplain) framework that explicitly explores the cross-modal association between the news content in different modalities and effectively exploits user comments to detect and explain misinformation in multimodal COVID-19 news articles. We evaluate DGExplain on two real-world multimodal COVID-19 news datasets. Evaluation results demonstrate that DGExplain significantly outperforms state-of-the-art baselines in terms of the accuracy of multimodal COVID-19 misinformation detection and the explainability of its detection results. © 2022 ACM.
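The consistency check at the heart of this line of work can be illustrated with a deliberately simplified sketch. This is not the DGExplain framework itself: the function names, the fixed-length embedding vectors, and the decision threshold are all hypothetical stand-ins for whatever learned representations and generative scoring the paper actually uses. The idea is only that an image embedding and a text embedding projected into a shared space should agree for a consistent article:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    if norm_u == 0.0 or norm_v == 0.0:
        return 0.0
    return dot / (norm_u * norm_v)

def consistency_score(image_embedding, text_embedding, threshold=0.5):
    """Flag a multimodal article as potentially inconsistent when its
    image and text embeddings disagree (similarity below threshold).
    The threshold value is an arbitrary illustration, not a tuned one."""
    sim = cosine_similarity(image_embedding, text_embedding)
    return {"similarity": sim, "consistent": sim >= threshold}
```

In a real system the embeddings would come from trained vision and language encoders, and low cross-modal similarity alone would be evidence, not a verdict.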

6.
9th International Conference on Big Data Analytics, BDA 2021 ; 13167 LNCS:201-208, 2022.
Article in English | Scopus | ID: covidwho-1750588

ABSTRACT

With the ever-increasing internet penetration across the world, there has been a huge surge in content on the worldwide web. Video has proven to be one of the most popular media. The COVID-19 pandemic has further pushed the envelope, forcing learners to turn to E-Learning platforms. In the absence of relevant descriptions of these videos, it becomes imperative to generate metadata based on the content of the video. In the current paper, an attempt has been made to index videos based on their visual and audio content. The visual content is extracted by applying Optical Character Recognition (OCR) to the stack of frames obtained from a video, while the audio content is generated using Automatic Speech Recognition (ASR). The OCR- and ASR-generated texts are combined to obtain the final description of the respective video. The dataset contains 400 videos spread across 4 genres. To quantify the accuracy of our descriptions, clustering is performed using the video descriptions to discern between video genres. © 2022, Springer Nature Switzerland AG.
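The indexing step described above, merging OCR and ASR output into one description and comparing descriptions to group videos by genre, can be sketched in a few lines. This is a simplified illustration, not the paper's code: the function names are invented, the OCR/ASR text is assumed to be already extracted, and a bag-of-words cosine similarity stands in for whichever distance the authors' clustering actually used:

```python
import math
from collections import Counter

def combine_texts(ocr_text, asr_text):
    """Concatenate OCR (on-screen) and ASR (spoken) text into one description."""
    return (ocr_text.strip() + " " + asr_text.strip()).strip()

def bow_cosine(text_a, text_b):
    """Bag-of-words cosine similarity between two video descriptions.
    Descriptions from the same genre should score higher than
    descriptions from different genres."""
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(c * c for c in a.values()))
            * math.sqrt(sum(c * c for c in b.values())))
    return dot / norm if norm else 0.0
```

A clustering algorithm such as k-means over such description vectors would then recover genre groupings, which is how the paper's accuracy measure is framed.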

7.
2021 IEEE International Conference on Big Data, Big Data 2021 ; : 899-908, 2021.
Article in English | Scopus | ID: covidwho-1730897

ABSTRACT

This paper studies an emerging and important problem of identifying misleading COVID-19 short videos where the misleading content is jointly expressed in the visual, audio, and textual content of videos. Existing solutions for misleading video detection mainly focus on the authenticity of videos or audios against AI algorithms (e.g., deepfake) or video manipulation, and are insufficient to address our problem where most videos are user-generated and intentionally edited. Two critical challenges exist in solving our problem: i) How to effectively extract information from the distractive and manipulated visual content in TikTok videos? ii) How to efficiently aggregate heterogeneous information across different modalities in short videos? To address the above challenges, we develop TikTec, a multimodal misinformation detection framework that explicitly exploits the captions to accurately capture the key information from the distractive video content, and effectively learns the composed misinformation that is jointly conveyed by the visual and audio content. We evaluate TikTec on a real-world COVID-19 video dataset collected from TikTok. Evaluation results show that TikTec achieves significant performance gains compared to state-of-the-art baselines in accurately detecting misleading COVID-19 short videos. © 2021 IEEE.
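The aggregation challenge (ii) is commonly handled with some form of fusion across modalities. As a minimal sketch only, and not TikTec's actual architecture (the paper learns a joint representation, whereas this illustrates plain late fusion with hypothetical modality names and weights), per-modality suspicion scores can be combined into a single video-level score:

```python
def fuse_modality_scores(scores, weights=None):
    """Late-fuse per-modality misinformation scores (each in [0, 1])
    into one video-level score via a weighted average.
    `scores` maps modality name -> score; `weights` optionally maps
    modality name -> importance (defaults to equal weighting)."""
    if weights is None:
        weights = {m: 1.0 for m in scores}
    total = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total
```

In practice, learned fusion (as in TikTec) can capture cases where the misleading signal only emerges from the combination of modalities, which a fixed weighted average cannot.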
